Kubevirt: Defer eve reboot/shutdown/update until drain completes #4494
base: master
Conversation
An odd yetus failure, not sure what to make of this.
yetus checking seems broken: 2025-01-18T02:42:21.7383936Z ./out has been created
@@ -77,6 +77,7 @@ type getconfigContext struct {
	configGetStatus types.ConfigGetStatus
	updateInprogress bool
	readSavedConfig bool // Did we already read it?
	drainInProgress bool
The code reads as if this field has the semantics of waitForDrainComplete (or deferForDrain). Is that the semantics in zedagent?
This follows the above updateInprogress usage to allow drain handling / deferred ops in zedagent.go:handleNodeAgentStatusImpl()
I found the name potentially confusing since this is really about waiting and not starting/triggering a drain.
I'm following now, I can reword it
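For illustration, a minimal sketch of the reworded semantics under discussion, using hypothetical names (this is not the PR's final code):

```go
package zedagent

// Illustrative sketch only: the flag is about waiting for an already-requested
// drain to finish, not about starting one, so a name and comment along these
// lines avoid the confusion noted above.
type getconfigContext struct {
	updateInprogress bool
	// waitDrainInProgress: a node drain was requested on behalf of a
	// deferred reboot/shutdown/update; defer acting on that operation
	// until NodeDrainStatus reports completion.
	waitDrainInProgress bool
}
```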
if ctx.poweroffCmdDeferred &&
	drainInProgress && !status.DrainInProgress {
	log.Functionf("Drain complete and deferred poweroff")
	ctx.poweroffCmdDeferred = false
I don't see the code which sets poweroffCmdDeferred due to a drain.
Did you miss a function which sets the various deferred fields due to the local profile server API being used? (I don't recall whether that is a separate function from the controller API.)
FWIW that code is in pkg/pillar/cmd/zedagent/localinfo.go and presumably it needs the drainInProgress checks for poweroff, shutdown, and reboot.
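A hypothetical sketch of the guard being asked about, assuming a local profile server handler and field names of this shape (they may not match the actual localinfo.go code):

```go
package zedagent

// Hypothetical sketch: the local profile server path in localinfo.go would
// need the same guard as the controller path, deferring poweroff (and
// similarly shutdown/reboot) while a node drain is still running.
type getconfigContext struct {
	poweroffCmdDeferred bool
	drainInProgress     bool
}

func handleLocalPoweroffCmd(ctx *getconfigContext) {
	if ctx.drainInProgress {
		// Remember the request; it is replayed once NodeDrainStatus
		// reports completion.
		ctx.poweroffCmdDeferred = true
		return
	}
	// No drain pending: execute the poweroff immediately (elided here).
}
```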
}

log.Noticef("handleNodeDrainStatusImplNA to:%v", newStatus)
// NodeDrainStatus Failures here should keep drainInProgress set.
If there is something catastrophic happening to a cluster (e.g., two out of three nodes die due to hardware failures), is there a way to recover from that and make the single remaining node be usable e.g., as a one node cluster or as a standalone device?
It seems like a reboot might not be usable to recover since (based on this comment) you can't reboot until the drain has succeeded, and that is presumably impossible if the two other nodes died while you were trying to drain.
What happens when the drain timer fires?
Is there a recovery case which does not involve a truck roll and a reinstall of EVE in such a case?
If only a single cluster node is usable then etcd will be unable to form a quorum and k3s won't be able to meet the ready state.
If the cluster is in a state where the local kubernetes node is not running, then applications should already be unavailable and the zedkube drain handler should just skip requesting drain. There is already a handler that skips drain for one kubernetes outage type and returns complete, but I do see how this needs to be expanded to handle more kubernetes outage cases and to show more clearly that zedkube allows these controller operations to proceed as recovery mechanisms, not just maintenance.
The key here is to make sure we appropriately debounce the intermittent kubernetes API issues we could see due to timeouts. This PR has a constant value of 180 seconds, but I'm going to move it to a config property to allow some modification.
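A minimal sketch of the debounce idea described above, with assumed names and structure (the 180-second window becoming a configurable value):

```go
package zedkube

import "time"

// apiDebounce tolerates transient kubernetes API errors until they have
// persisted longer than a configurable window; only then is the drain
// reported as failed. Names and structure are assumptions, not the PR's code.
type apiDebounce struct {
	window     time.Duration // e.g. 180s today, to come from a config property
	firstError time.Time     // zero while the API is healthy
}

// observe returns true once errors have persisted past the window.
func (d *apiDebounce) observe(apiErr error, now time.Time) bool {
	if apiErr == nil {
		d.firstError = time.Time{} // healthy again, reset
		return false
	}
	if d.firstError.IsZero() {
		d.firstError = now
	}
	return now.Sub(d.firstError) >= d.window
}
```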
I was thinking of a different sequence.
- Cluster is running with three nodes.
- Controller (or LOC) requests a device reboot or EVE update of node X
- Drain starts
- Node Y fails/disappears from the cluster for any reason (hardware, software crash, network partition)
In such a case should we proceed with the device reboot/update of node X? Or pause until Y is back in the cluster?
In addition to the detailed comments, I think the text in the PR description should go in a markdown file.
Also, at a high-level what are the pre-conditions for the reboot/shutdown/update?
Is it that all three nodes are healthy?
(That would ensure that if there is a hardware failure or crash of another node while this one is rebooting/updating, there is still one node left.)
If while the drain is in progress one of the other nodes in the cluster dies, should the reboot/update be deferred until that node is back?
pkg/pillar/cmd/zedkube/zedkube.go (Outdated)
if override != nil {
	zedkubeCtx.pubNodeDrainStatus.Publish("global", override)
}
zedkubeCtx.drainOverrideTimer.Reset(5 * time.Minute)
Shouldn't you Stop the timer here as well like in the initial case?
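For reference, a small sketch of the standard Go idiom the comment refers to: stop and drain a time.Timer before Reset so a stale expiry from the previous interval cannot fire alongside the new one (illustrative helper, not the PR's code):

```go
package zedkube

import "time"

// resetOverrideTimer stops and drains the timer before rearming it, so a
// pending expiry from the old interval cannot be delivered after Reset.
func resetOverrideTimer(t *time.Timer, d time.Duration) {
	if !t.Stop() {
		select {
		case <-t.C: // drain a pending expiry, if any
		default:
		}
	}
	t.Reset(d)
}
```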
// Just print the transitions which are linked to lengthy operations or errors
	return
}
ctx.ph.Print("INFO: Node Drain -> %s at %v\n", newStatus.Status.String(), ts)
Should this also go in the new console TUI? Check with @rucoder
Hi @eriknordmark, thanks for the review, I'll address the docs changes and cleanup.
As a part of kubevirt-eve we have multiple cluster nodes, each hosting app workloads and volume replicas. This implements deferral of EVE management config operations that would result in unavailability of storage replicas, until the cluster volume is no longer running on a single replica.

An example:
1. Node 1 outage and recovers.
2. Before volumes complete rebuilding on node 1 there is a node 2 outage and recovery.
3. Volumes begin rebuilding replicas on nodes 1 and 2. The only available rebuild source is on node 3.
4. User-initiated request to reboot/shutdown/update eve-os on node 3.
5. That config request is set to defer until replicas are rebuilt on the other nodes.

At a high level the eve-side workflow looks like this:
1. EVE config received requesting reboot/shutdown/baseos-image-change for node 1.
2. Drain requested for node 1.
3. zedkube cordons node 1 so that new workloads are blocked from scheduling on that node.
4. zedkube initiates a kubernetes drain of that node, removing workloads.
5. As a part of drain, the PDB (Pod Disruption Budget) at the longhorn level determines the local replica is the last online one.
6. Drain waits for volume replicas to rebuild across the cluster.
7. Drain completes and a NodeDrainStatus message is sent to continue the original config request.
8. On the next boot event zedkube nodeOnBootHealthStatusWatcher() waits until the local kubernetes node comes online/ready for the first time on each boot and uncordons it, allowing workloads to be scheduled.

Note: For EVE baseos image updates this path waits until the new baseos image is fully available locally (LOADED or INSTALLED) and activated before beginning drain.

Signed-off-by: Andrew Durbin <[email protected]>
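A minimal sketch of steps 3 and 4 above (cordon, then drain) using the k8s.io/kubectl/pkg/drain dependency listed in the next commit message; the helper settings and function name here are illustrative assumptions, not the PR's exact code:

```go
package zedkube

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/kubectl/pkg/drain"
)

// cordonAndDrain blocks new workloads from scheduling on the node, then evicts
// the existing ones so volume replicas can rebuild elsewhere before the
// reboot/shutdown/update proceeds.
func cordonAndDrain(ctx context.Context, client kubernetes.Interface, nodeName string) error {
	helper := &drain.Helper{
		Ctx:                 ctx,
		Client:              client,
		Force:               false,
		IgnoreAllDaemonSets: true,
		DeleteEmptyDirData:  true,
		GracePeriodSeconds:  -1,               // use each pod's own grace period
		Timeout:             15 * time.Minute, // illustrative value only
		Out:                 os.Stdout,
		ErrOut:              os.Stderr,
	}

	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Step 3: cordon (mark the node unschedulable).
	if err := drain.RunCordonOrUncordon(helper, node, true); err != nil {
		return err
	}
	// Step 4: drain; longhorn's PodDisruptionBudget blocks eviction of the
	// last healthy replica until rebuilds complete on the other nodes.
	return drain.RunNodeDrain(helper, nodeName)
}
```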
Main dependencies listed here; others pulled in for version conflicts.
- k8s.io/kubectl/pkg/drain
- github.com/longhorn/longhorn-manager

Signed-off-by: Andrew Durbin <[email protected]>
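And a sketch of step 8 of the workflow above, the boot-time path that waits for the local node to report Ready and then uncordons it; the names, polling interval, and structure are assumptions for illustration:

```go
package zedkube

import (
	"context"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/kubectl/pkg/drain"
)

// uncordonWhenReady polls until the local kubernetes node reports Ready for
// the first time after boot, then uncordons it so workloads can be scheduled.
func uncordonWhenReady(ctx context.Context, client kubernetes.Interface, nodeName string) error {
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					helper := &drain.Helper{Ctx: ctx, Client: client, Out: os.Stdout, ErrOut: os.Stderr}
					// desired=false removes the unschedulable taint set at drain time.
					return drain.RunCordonOrUncordon(helper, node, false)
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(15 * time.Second): // illustrative polling interval
		}
	}
}
```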
This PR implements the original API added in #4366.